1.
BMJ; 384: e078538, 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38508682

ABSTRACT

OBJECTIVES: To evaluate the effectiveness of safeguards to prevent large language models (LLMs) from being misused to generate health disinformation, and to evaluate the transparency of artificial intelligence (AI) developers regarding their risk mitigation processes against observed vulnerabilities.

DESIGN: Repeated cross sectional analysis.

SETTING: Publicly accessible LLMs.

METHODS: In a repeated cross sectional analysis, four LLMs (via chatbot/assistant interfaces) were evaluated: OpenAI's GPT-4 (via ChatGPT and Microsoft's Copilot), Google's PaLM 2 and newly released Gemini Pro (via Bard), Anthropic's Claude 2 (via Poe), and Meta's Llama 2 (via HuggingChat). In September 2023, these LLMs were prompted to generate health disinformation on two topics: sunscreen as a cause of skin cancer and the alkaline diet as a cancer cure. Jailbreaking techniques (ie, attempts to bypass safeguards) were evaluated if required. For LLMs with observed safeguarding vulnerabilities, the processes for reporting outputs of concern were audited. Twelve weeks after the initial investigations, the disinformation generation capabilities of the LLMs were re-evaluated to assess any subsequent improvements in safeguards.

MAIN OUTCOME MEASURES: The main outcome measures were whether safeguards prevented the generation of health disinformation, and the transparency of risk mitigation processes against health disinformation.

RESULTS: Claude 2 (via Poe) declined 130 prompts submitted across the two study timepoints requesting the generation of content claiming that sunscreen causes skin cancer or that the alkaline diet is a cure for cancer, even with jailbreaking attempts. GPT-4 (via Copilot) initially refused to generate health disinformation, even with jailbreaking attempts, although this was no longer the case at 12 weeks. In contrast, GPT-4 (via ChatGPT), PaLM 2/Gemini Pro (via Bard), and Llama 2 (via HuggingChat) consistently generated health disinformation blogs. In the September 2023 evaluations, these LLMs facilitated the generation of 113 unique cancer disinformation blogs, totalling more than 40 000 words, without requiring jailbreaking attempts. The refusal rate across the evaluation timepoints for these LLMs was only 5% (7 of 150). As prompted, the LLM-generated blogs incorporated attention-grabbing titles, authentic-looking (fake or fictional) references, and fabricated testimonials from patients and clinicians, and they targeted diverse demographic groups. Although each LLM evaluated had mechanisms to report observed outputs of concern, the developers did not respond when observations of vulnerabilities were reported.

CONCLUSIONS: This study found that although effective safeguards are feasible to prevent LLMs from being misused to generate health disinformation, they were inconsistently implemented. Furthermore, effective processes for reporting safeguard problems were lacking. Enhanced regulation, transparency, and routine auditing are required to help prevent LLMs from contributing to the mass generation of health disinformation.
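The headline refusal rate above (7 of 150 prompts, reported as 5%) follows from a simple tally. A minimal sketch in Python, using only the counts given in the abstract (the function name is illustrative, not from the study):

```python
# Counts reported in the abstract: 150 disinformation prompts were submitted
# across both timepoints to the three LLMs that lacked effective safeguards,
# of which only 7 were refused.

def refusal_rate_pct(refused: int, total: int) -> int:
    """Refusal rate as a whole-number percentage."""
    return round(100 * refused / total)

print(refusal_rate_pct(7, 150))  # 5  (7/150 = 4.7%, rounded to 5%)
```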


Subjects
Camelids, New World; Skin Neoplasms; Humans; Animals; Disinformation; Artificial Intelligence; Cross-Sectional Studies; Sunscreening Agents; Language
2.
BMC Med; 21(1): 400, 2023 Oct 23.
Article in English | MEDLINE | ID: mdl-37872545

ABSTRACT

Data sharing is essential for promoting scientific discoveries and informed decision-making in clinical practice. In 2013, PhRMA/EFPIA recognised the importance of data sharing and supported initiatives to enhance clinical trial data transparency and promote scientific advancements. However, despite these commitments, recent investigations indicate significant scope for improvements in data sharing by the pharmaceutical industry. Drawing on a decade of literature and policy developments, this article presents perspectives from a multidisciplinary team of researchers, clinicians, and consumers. The focus is on policy and process updates to the PhRMA/EFPIA 2013 data sharing commitments, aiming to enhance the sharing and accessibility of participant-level data, clinical study reports, protocols, statistical analysis plans, lay summaries, and result publications from pharmaceutical industry-sponsored trials. The proposed updates provide clear recommendations regarding which data should be shared, when they should be shared, and under what conditions. The suggested improvements aim to develop a data sharing ecosystem that supports science and patient-centred care. Good data sharing principles require resources, time, and commitment. Notwithstanding these challenges, enhancing data sharing is necessary for efficient resource utilisation, increased scientific collaboration, and better decision-making for patients and healthcare professionals.


Subjects
Clinical Trials as Topic; Information Dissemination; Humans; Policies; Drug Industry
3.
JAMA Oncol; 8(9): 1310-1316, 2022 Sep 1.
Article in English | MEDLINE | ID: mdl-35900732

ABSTRACT

Importance: Emerging policies drafted by the pharmaceutical industry indicate that they will transparently share clinical trial data. These data offer an unparalleled opportunity to advance evidence-based medicine and support decision-making.

Objective: To evaluate the eligibility of independent, qualified researchers to access individual participant data (IPD) from oncology trials that supported US Food and Drug Administration (FDA) approval of new anticancer medicines within the past 10 years.

Design, Setting, and Participants: In this quality improvement study, a cross-sectional analysis was performed of pivotal clinical trials whose results supported FDA-approved anticancer medicines between January 1, 2011, and June 30, 2021. These trials' results were identified from product labels.

Exposures: Eligibility for IPD sharing was confirmed by identification of a public listing of the trial as eligible for sharing or by receipt of a positive response from the sponsor to a standardized inquiry.

Main Outcomes and Measures: The main outcome was frequency of IPD sharing eligibility. Reasons for data sharing ineligibility were requested and collated, and company-, drug-, and trial-level subgroups were evaluated and presented using χ2 tests and forest plots.

Results: During the 10-year period examined, 115 anticancer medicines were approved by the FDA on the basis of evidence from 304 pharmaceutical industry-sponsored trials. Of these trials, 136 (45%) were eligible for IPD sharing and 168 (55%) were not. Data sharing rates differed substantially among industry sponsors, with the most common reason for not sharing trial IPD being that the collection of long-term follow-up data was still ongoing (89 of 168 trials [53%]). Of the top 10 anticancer medicines by global sales, nivolumab, pembrolizumab, and pomalidomide had the lowest eligibility rates for data sharing (<10% of trials).

Conclusions and Relevance: There has been a substantial increase in IPD sharing for industry-sponsored oncology trials over the past 5 years. However, this quality improvement study found that more than 50% of queried trials for FDA-approved anticancer medicines were ineligible for IPD sharing. Data accessibility would be substantially improved if, at the time of FDA registration of a medicine, all data that support the registration were made available.
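The eligibility percentages above, and the χ2 subgroup comparisons the study reports, rest on straightforward frequency arithmetic. A minimal Python sketch, using the trial counts from the abstract; the 2x2 table in the chi-squared helper is a generic Pearson formula with hypothetical counts, not data from the paper:

```python
# Counts from the abstract: of 304 industry-sponsored pivotal trials,
# 136 were eligible for IPD sharing and 168 were not.
eligible, ineligible = 136, 168
total = eligible + ineligible

pct_eligible = round(100 * eligible / total)      # 45
pct_ineligible = round(100 * ineligible / total)  # 55

def chi2_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-squared statistic for the 2x2 table [[a, b], [c, d]],
    as used for subgroup comparisons of sharing rates (counts hypothetical)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

print(pct_eligible, pct_ineligible)  # 45 55
```

A table with identical sharing rates in both subgroups, e.g. `chi2_2x2(10, 10, 10, 10)`, gives a statistic of 0, as expected.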


Subjects
Antineoplastic Agents; Neoplasms; Antineoplastic Agents/therapeutic use; Cross-Sectional Studies; Drug Approval; Humans; Information Dissemination; Neoplasms/drug therapy; Nivolumab; Pharmaceutical Preparations; United States; United States Food and Drug Administration